A social robot is an autonomous robot that interacts and communicates with humans or other autonomous physical agents by following the social behaviors and rules attached to its role. This definition implies that a social robot must have a physical embodiment, which excludes on-screen characters. Some recently developed robots use a screen to display the robot's head; such machines are borderline cases. If the body functions only as a holder for the screen, the system cannot be considered a robot, but if it also has physical motor and sensory abilities, it can.
The field of social robotics was pioneered in the 1940s and 1950s by William Grey Walter and has been developed since the early 1990s by artificial intelligence researchers, most notably Kerstin Dautenhahn, as well as Maja Mataric, Cynthia Breazeal, Aude Billard, Yiannis Demiris, and Brian Duffy. Also related is the Kansei engineering movement in Japanese science and technology; for its connection to social robotics, see especially the works of Yoshihiro Miyake, Tomio Watanabe, Hideki Kozima, Takayuki Kanda, and Hiroshi Ishiguro.
Autonomy is a requirement for a social robot. A completely remote-controlled robot cannot be considered social, since it makes no decisions by itself; it is merely an extension of the human operating it. This does not mean that the robot must be fully autonomous, however: semi-autonomy still appears to be acceptable.
The definition states that a social robot should communicate and interact with humans and other embodied agents. Such interaction is likely to be cooperative, but the definition is not limited to cooperative settings; uncooperative behavior can also be considered social in certain situations. The robot could, for example, exhibit competitive behavior within the framework of a game. The robot could also interact with minimal or no communication. It could, for example, hand tools to an astronaut working on a space station, although some communication will likely be necessary at some point. Two suggested ultimate requirements for social robots are the Turing test, to assess the robot's communication skills, and Isaac Asimov's Three Laws of Robotics, to govern its behavior (whether these requirements can usefully be applied in real-world settings, particularly in the case of Asimov's laws, remains disputed and may not be possible at all). A consequence of this viewpoint, however, is that a robot that interacts and communicates only with other robots would not be considered a social robot: being social is bound to humans and their society, which defines the relevant social values, norms, and standards. This makes social robots culturally dependent, since social values, norms, and standards differ between cultures.
This leads directly to the last part of the definition: a social robot must interact within the social rules attached to its role, and that role and its rules are defined by society. For example, a robotic butler for humans would have to comply with established rules of good service: it should be anticipatory, reliable, and above all discreet, and a social robot must be aware of these expectations and comply with them. However, social robots that interact only with other autonomous robots would behave and interact according to their own, non-human conventions.